Precise and high-fidelity force control is critical for new generations of robots that interact with humans and unknown environments. Mobile robots, such as wearable devices and legged robots, must also be lightweight to perform their function. Hydrostatic transmissions have been proposed as a promising strategy to meet these two challenging requirements. In previous publications, it was shown that using magnetorheological (MR) actuators coupled with hydrostatic transmissions provides high power density and excellent open-loop human-robot interaction. Still, transmission dynamics and nonlinear friction degrade the open-loop force fidelity at low and high frequencies. This letter compares control strategies for an MR-hydrostatic actuator system to increase its torque fidelity, defined as the bandwidth (measured torque tracking a torque reference) and the transparency (minimizing the undesired forces reflected to the end effector when backdriving the robot). Four control approaches are developed and compared experimentally: (1) open-loop control with friction compensation; (2) non-collocated pressure feedback; (3) collocated pressure feedback; (4) LQGI state feedback. A dither strategy is also implemented to smooth ball-screw friction. Results show that approaches (1), (2) and (3) can improve performance but face trade-offs, while approach (4) improves all metrics simultaneously. These results show the potential of using control schemes to improve the force control performance of robots with tethered architectures, addressing issues such as transmission dynamics and friction.
Wearable robots are limited by the performance of their actuators, since they must carry the weight of their own power system and energy source. This paper explores the idea of exploiting hybrid modes to meet multiple operating points with a lightweight and efficient system, by using hydraulic valves to dynamically reconfigure the connections of a hydrostatic actuator. The opportunities analyzed include 1) switching between a highly geared power source and a fast power source, 2) dynamically connecting an energy accumulator, and 3) using a locking mechanism to hold position. Based on a knee exoskeleton case study, the analysis shows that switching between gear ratios can lead to a lighter and more efficient actuator. Moreover, the results show that using an accumulator to supply a continuous preload force has a large mass-saving potential, but that it does not significantly reduce mass if used as a power booster for short transients. Finally, using a locking valve can slightly reduce battery mass if the duty cycle includes frequent stops. The operating principles of the proposed multimodal schemes are demonstrated with a one-degree-of-freedom prototype.
Supernumerary Robotic Limbs (SRLs) are wearable robots that augment human capabilities by acting as a co-worker, reaching for objects, supporting a person's arms, etc. However, existing SRLs lack the mechanical backdrivability and bandwidth required for tasks where the interaction forces must be controlled, such as painting or manipulating fragile objects. Being highly backdrivable with a high bandwidth while minimizing weight presents a major technological challenge imposed by the limited performance of conventional electromagnetic actuators. This paper studies the feasibility of using magnetorheological (MR) clutches coupled to a low-friction hydrostatic transmission to provide a highly capable yet lightweight and controllable SRL. A 2.7 kg two-DOF wearable robotic arm is designed and built. The shoulder and elbow joints are designed to deliver 39 and 25 Nm, with ranges of motion of 115 and 180 deg. Experimental studies conducted on a one-DOF test bench, and validated analytically, demonstrate a high force bandwidth (>25 Hz) and the ability to control interaction forces even when coupled to an external impedance. Furthermore, three force control approaches are studied and validated experimentally: open-loop, closed-loop on force, and closed-loop on pressure. All three methods are shown to be effective. Overall, the proposed MR-hydrostatic actuation system is well suited for a lightweight SRL interacting with both humans and the environment, which adds unpredictable disturbances.
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with multiple modalities like images. However, current MMEA algorithms all adopt KG-level modality fusion strategies but ignore modality differences among individual entities, hurting the robustness to potential noise involved in modalities (e.g., unidentifiable images and relations). In this paper we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, to dynamically predict the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed for addressing vague entity details. Extensive experimental results show that our model not only achieves SOTA performance on multiple training scenarios including supervised, unsupervised, iterative, and low resource, but also has limited parameters, optimistic speed, and good interpretability. Our code will be available soon.
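The instance-level fusion idea above can be sketched in a few lines: each entity gets its own per-modality relevance scores, which are normalized and used to weight the modality embeddings before summing. This is a minimal illustration of dynamic, entity-specific fusion, not MEAformer's actual transformer architecture; the function names and score inputs are our own assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_modalities(features, scores):
    """Instance-level modality fusion (toy sketch, not MEAformer itself).

    features: dict modality -> embedding (equal-length lists of floats)
    scores:   dict modality -> scalar relevance score for THIS entity
    Returns the weighted-sum embedding and the per-modality weights.
    """
    mods = sorted(features)
    weights = softmax([scores[m] for m in mods])
    dim = len(features[mods[0]])
    fused = [0.0] * dim
    for w, m in zip(weights, mods):
        for i in range(dim):
            fused[i] += w * features[m][i]
    return fused, dict(zip(mods, weights))
```

An entity with an unidentifiable image would receive a low image score, so its image embedding contributes little to the fused representation, which is exactly the robustness-to-noisy-modalities behavior the paper targets.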
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks but is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning. In several aspects, PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined. The learning procedure of PLD solves the optimization problem related to the search for rules (called probabilistic laws), which have a minimal length and relatively high probability. At inference, ensembles of these rules are used for prediction. Probabilistic laws are human-readable and PLD based models are transparent and inherently interpretable. Applications of PLD include classification/clusterization/regression tasks, as well as time series analysis/anomaly detection and adaptive (robotic) control. In this paper, we outline the main principles of PLD, highlight its benefits and limitations and provide some application guidelines.
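The core search described above, rules of minimal length with relatively high conditional probability, can be sketched as follows. This is a toy enumeration over conjunctions of attribute=value conditions, assuming a tabular dataset; PLD's actual optimization procedure is more sophisticated.

```python
from itertools import combinations

def find_probabilistic_laws(rows, target, min_prob=0.8, max_len=2):
    """Toy search for 'probabilistic laws': short conjunctions of
    attribute=value conditions under which the target label holds
    with high conditional probability.

    rows: list of (dict attribute -> value, label) pairs.
    Returns (conditions, probability, coverage) triples.
    """
    # Candidate atomic conditions observed in the data.
    atoms = sorted({(a, v) for attrs, _ in rows for a, v in attrs.items()})
    laws = []
    for k in range(1, max_len + 1):
        for conds in combinations(atoms, k):
            covered = [lbl for attrs, lbl in rows
                       if all(attrs.get(a) == v for a, v in conds)]
            if not covered:
                continue
            prob = sum(1 for lbl in covered if lbl == target) / len(covered)
            if prob >= min_prob:
                laws.append((conds, prob, len(covered)))
        if laws:
            break  # prefer minimal-length rules, as PLD does
    return laws
```

The resulting rules are directly human-readable ("if shape=round then apple, p=1.0"), which is the interpretability property the abstract emphasizes.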
Partial differential equations (PDEs) are widely used to describe physical and engineering phenomena. Some key parameters involved in PDEs, which represent certain physical properties with important scientific interpretations, are difficult or even impossible to measure directly. Estimating these parameters from noisy and sparse experimental data of related physical quantities is an important task. Many methods for PDE parameter inference involve a large number of evaluations of the numerical solution of the PDE through algorithms such as the finite element method, which can be time-consuming, especially for nonlinear PDEs. In this paper, we propose a novel method for estimating unknown parameters in PDEs, called PDE-Informed Gaussian Process Inference (PIGPI). By modeling the PDE solution as a Gaussian process (GP), we derive the manifold constraints induced by the (linear) PDE structure such that, under the constraints, the GP satisfies the PDE. For nonlinear PDEs, we propose an augmentation method that transforms the nonlinear PDE into an equivalent PDE system linear in all derivatives, which PIGPI can handle. PIGPI can be applied to multi-dimensional PDE systems and PDE systems with unobserved components. The method completely bypasses the numerical solver for the PDE, thus achieving drastic savings in computation time, especially for nonlinear PDEs. Moreover, the PIGPI method provides uncertainty quantification for both the unknown parameters and the PDE solution. The proposed method is demonstrated on several application examples from different areas.
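The augmentation step can be illustrated with a toy example of our own choosing (the viscous Burgers' equation; the paper's own examples and construction may differ):

```latex
% Nonlinear PDE: the convection term u u_x is nonlinear in the derivatives.
u_t = \theta\, u_{xx} - u\, u_x .
% Introduce the auxiliary component w := \tfrac{1}{2} u^2, so that w_x = u\, u_x.
% The augmented system is linear in every derivative (u_t, u_{xx}, w_x):
u_t = \theta\, u_{xx} - w_x , \qquad w = \tfrac{1}{2} u^2 .
% The nonlinearity survives only in the derivative-free relation w = u^2/2,
% which enters as an additional manifold constraint on the joint GP model of (u, w).
```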
We study the multiclass classification problem where the features come from a mixture of time-homogeneous diffusions. Specifically, the classes are discriminated by their drift functions, while the diffusion coefficient is common to all classes and unknown. In this framework, we build a plug-in classifier that relies on nonparametric estimators of the drift and diffusion functions. We first establish the consistency of our classification procedure under mild assumptions and then provide rates of convergence under different sets of assumptions. Finally, a numerical study supports our theoretical findings.
Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces. In cases where the transition dynamics can be readily evaluated at specified states (e.g., via a simulator), agents can operate in what is often referred to as planning with a \emph{generative model}. We propose the AE-LSVI algorithm for best-policy identification, a novel variant of the kernelized least-squares value iteration (LSVI) algorithm that combines optimism with pessimism for active exploration (AE). AE-LSVI provably identifies a near-optimal policy \emph{uniformly} over an entire state space and achieves polynomial sample complexity guarantees that are independent of the number of states. When specialized to the recently introduced offline contextual Bayesian optimization setting, our algorithm achieves improved sample complexity bounds. Experimentally, we demonstrate that AE-LSVI outperforms other RL algorithms in a variety of environments when robustness to the initial state is required.
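The optimism-plus-pessimism principle behind AE-LSVI can be conveyed with a deliberately reduced sketch: a one-state bandit queried through a generative model. We keep upper (optimistic) and lower (pessimistic) confidence bounds per action, actively query wherever the bound gap is widest, and recommend pessimistically. This is our own simplification for illustration; it omits the kernelized value iteration and the state-space uniformity that the actual algorithm provides.

```python
import math

def ae_best_action(query, actions, rounds=200, beta=2.0):
    """Toy active-exploration loop in the spirit of AE-LSVI,
    reduced to a single-state bandit with a generative model.

    query(a): returns one noisy reward sample for action a.
    beta:     confidence-width multiplier (assumption of this sketch).
    """
    stats = {a: (0.0, 0) for a in actions}  # (reward sum, sample count)

    def bounds(a):
        s, n = stats[a]
        if n == 0:
            return float("-inf"), float("inf")
        width = beta / math.sqrt(n)
        return s / n - width, s / n + width  # (pessimistic, optimistic)

    for _ in range(rounds):
        # Active exploration: sample the action we are most uncertain about,
        # i.e., the one with the widest optimism-pessimism gap.
        a = max(actions, key=lambda a: bounds(a)[1] - bounds(a)[0])
        s, n = stats[a]
        stats[a] = (s + query(a), n + 1)

    # Pessimistic recommendation: best lower confidence bound.
    return max(actions, key=lambda a: bounds(a)[0])
```

Querying by uncertainty rather than by current reward estimate is what makes the identified policy good uniformly, not just from the states an initial-state distribution happens to favor.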
In many real-world scenarios, the absence of external knowledge sources such as Wikipedia restricts question answering systems to relying on latent internal knowledge in limited dialogue data. In addition, humans often seek answers by asking several questions to gather more comprehensive information. As the dialogue becomes more extensive, machines are challenged to refer to previous conversation rounds to answer questions. In this work, we propose to leverage latent knowledge in existing conversation logs via a neural Retrieval-Reading system, enhanced with a TF-IDF-based text summarizer that refines lengthy conversational history to alleviate the long-context issue. Our experiments show that our Retrieval-Reading system can exploit retrieved background knowledge to generate significantly better answers. The results also indicate that our context summarizer significantly helps both the retriever and the reader by introducing more concise and less noisy contextual information.
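A TF-IDF extractive summarizer of the kind described can be sketched in plain Python: score each utterance by the summed TF-IDF weight of its words and keep the top-k utterances in their original order. The tokenization and scoring details here are our assumptions, not necessarily those of the paper's summarizer.

```python
import math
import re
from collections import Counter

def tfidf_summarize(history, k=2):
    """Extractive summarizer sketch for conversational history.

    history: list of utterance strings; each utterance is treated as
    one 'document' when computing inverse document frequency.
    Returns the k highest-scoring utterances in original order.
    """
    docs = [re.findall(r"[a-z']+", u.lower()) for u in history]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))

    def score(d):
        if not d:
            return 0.0
        tf = Counter(d)
        # Sum of TF-IDF weights over the utterance's vocabulary.
        return sum((c / len(d)) * math.log(n / df[w]) for w, c in tf.items())

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [history[i] for i in sorted(ranked[:k])]
```

Frequent small talk ("hello there") scores low because its words appear in many utterances, while information-dense turns are kept, which is how the summarizer reduces noise for both the retriever and the reader.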